Human Activity Recognition (HAR) is a field of study that aims to develop algorithms and techniques for automatically identifying and classifying human activities. Different people can perform the same activity in different ways, and a single action can have many variations.
Modern human activity recognition systems are largely trained and applied on video streams and image data in order to identify features and activities, including variations in the data that involve similar or related movements. Human activity recognition plays an important role in both human-to-human interaction and human-computer interaction. Manually operated surveillance systems take much longer and cost more. The goal of this project is to develop a quicker, more affordable system that recognizes human activity and proactively detects threats through a live camera feed. The system can help the end user in a variety of applications, such as surveillance and assistance in examination centres, hospitals, restricted areas and banks, by identifying the activity being carried out in a video or image and raising an alert if any suspicious activity is detected. Not only is this system affordable, it also functions as a utility that can be integrated into a variety of applications to speed up and assist with tasks that require recognition, resulting in significant time savings and high accuracy.
I. INTRODUCTION
The increasing demand for advanced security measures in military restricted areas has prompted the development of various technologies to aid in monitoring and safeguarding these areas. One such technology is Human Activity Recognition (HAR), which uses machine learning algorithms to identify and classify human actions and behaviours. HAR methods use deep learning techniques, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), to process data and classify human activities. In this project, our aim is to develop a HAR model for military restricted areas. We use Google's Teachable Machine, a web-based platform whose trained models can be exported for use in Python and JavaScript; it works by training a model on user-provided datasets and then classifying activities according to the classes defined during training. The objective of our study is to create a real-time system that can accurately detect and classify human activities, thereby enhancing security and situational awareness in restricted areas. In this paper, we present the methodology and results of our study, highlighting the potential benefits of this approach for improving security in military environments, and we are committed to contributing to the development of innovative and effective solutions for military security. Overall, HAR has the potential to revolutionize many fields and improve our understanding of human behaviour. With continued research and development, HAR will become increasingly accurate and reliable, enabling a wide range of applications and benefits for society.
II. SYSTEM DESIGN
The model detects humans and recognizes their actions or movements in the surroundings. It is trained to classify activities based on the given conditions and gives out an alert if an activity is found to be suspicious.
We have used a popular deep learning technology for detecting humans and classifying the activities they perform. We use Teachable Machine for video capturing, pose detection and prediction. Teachable Machine is a web-based tool built on TensorFlow.js, and its trained models can be exported for use in both Python and JavaScript. It works by training models on user-provided datasets and using them to classify activities without any external source. Our system captures frames from live surveillance through a webcam or camera.
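As a minimal sketch of this setup, assuming a pose model exported from Teachable Machine and the @teachablemachine/pose JavaScript library (the model URL below is a placeholder), the exported model can be loaded and the webcam started as follows:

```javascript
// Minimal sketch: load an exported Teachable Machine pose model and start webcam capture.
// Assumes the @tensorflow/tfjs and @teachablemachine/pose scripts are included on the page,
// and that MODEL_URL points to a model exported from Teachable Machine (placeholder here).
const MODEL_URL = "https://teachablemachine.withgoogle.com/models/<your-model-id>/";

let model, webcam, maxPredictions;

async function init() {
  // model.json holds the network topology, metadata.json holds the class labels.
  model = await tmPose.load(MODEL_URL + "model.json", MODEL_URL + "metadata.json");
  maxPredictions = model.getTotalClasses();   // number of classes defined during training

  // 200x200 mirrored webcam feed, as in the standard Teachable Machine export snippet.
  webcam = new tmPose.Webcam(200, 200, true);
  await webcam.setup();   // asks for camera permission
  await webcam.play();
  document.body.appendChild(webcam.canvas);
}

init();
```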
The captured data is then used to train the model with deep learning techniques by specifying the network architecture, the number of epochs, the batch size, the learning rate and the number of layers used for feature extraction. The trained model is then used to classify the activities.
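For illustration only (this is not Teachable Machine's internal implementation), the sketch below shows how these hyperparameters map onto a small TensorFlow.js classification head trained on pre-extracted pose feature vectors; the tensors `xs` and `ys`, the feature size and the hyperparameter values are assumptions.

```javascript
// Illustrative only: a small TensorFlow.js classifier trained with the same kinds of
// hyperparameters (epochs, batch size, learning rate) that Teachable Machine exposes.
// `xs` (feature vectors) and `ys` (one-hot labels) are assumed to be prepared elsewhere.
const NUM_FEATURES = 34;   // e.g. 17 pose keypoints x (x, y) coordinates -- assumption
const NUM_CLASSES = 2;     // "normal" and "suspicious"

const classifier = tf.sequential();
classifier.add(tf.layers.dense({ inputShape: [NUM_FEATURES], units: 64, activation: "relu" }));
classifier.add(tf.layers.dense({ units: NUM_CLASSES, activation: "softmax" }));

classifier.compile({
  optimizer: tf.train.adam(0.001),          // learning rate
  loss: "categoricalCrossentropy",
  metrics: ["accuracy"],
});

async function train(xs, ys) {
  await classifier.fit(xs, ys, { epochs: 50, batchSize: 16, shuffle: true });
}
```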
The model works as follows: when the application starts, live capturing begins through the webcam or camera. When a person is detected in the camera feed, that person's activities are tracked according to what the model has been trained on, and the screen displays the classification of each activity as either normal or suspicious.
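Continuing the Teachable Machine sketch above, a minimal capture-and-classify loop might look like the following; the element id used for display is an assumption.

```javascript
// Per-frame loop: update the webcam, estimate the pose, classify it, and show the result.
// Assumes `model` and `webcam` were initialised as in the earlier sketch, and that an
// element with id "label" exists on the page. Call loop() once init() has completed.
async function loop() {
  webcam.update();                      // grab the latest frame
  await classifyFrame();
  window.requestAnimationFrame(loop);   // keep the live capture running
}

async function classifyFrame() {
  // Stage 1: PoseNet-based pose estimation; posenetOutput is the feature vector
  // consumed by the Teachable Machine classification head.
  const { pose, posenetOutput } = await model.estimatePose(webcam.canvas);

  // Stage 2: classify the pose into the trained classes (e.g. "Normal", "Suspicious").
  const predictions = await model.predict(posenetOutput);
  const top = predictions.reduce((a, b) => (a.probability > b.probability ? a : b));

  document.getElementById("label").innerText =
    `${top.className} (${(top.probability * 100).toFixed(1)}%)`;
}
```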
When an activity is found to be illegal or suspicious, an alarm is raised to make the admin aware that something suspicious is happening in the restricted area. In addition to the alarm, we have added a calling feature that automatically places a call to the admin so that they can respond quickly to the ongoing suspicious activity.
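A hedged sketch of this alert path is shown below; the alarm file, the class name, the confidence threshold and the /trigger-call endpoint are assumptions for illustration, since the paper does not specify how the call is placed (a telephony service could sit behind such an endpoint).

```javascript
// Sketch of the alert path: play an alarm sound and ask a backend to call the admin.
// The alarm file, the /trigger-call endpoint and the 0.8 threshold are illustrative assumptions.
const ALARM = new Audio("alarm.mp3");
let alertSent = false;

async function handlePrediction(predictions) {
  const suspicious = predictions.find((p) => p.className === "Suspicious");
  if (suspicious && suspicious.probability > 0.8 && !alertSent) {
    alertSent = true;               // avoid repeated calls for the same event
    ALARM.play();                   // audible alarm for on-site staff
    await fetch("/trigger-call", {  // backend places the phone call to the admin
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ activity: suspicious.className, confidence: suspicious.probability }),
    });
  }
}
```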
If the activity is found to be legal, the flow starts again from live capture. The model will be used to prevent suspicious activities and to ensure the security of restricted areas.
III. SYSTEM IMPLEMENTATION
The implementation of this system has different phases: data collection, pre-processing, activity recognition and alert generation.
The first step is to collect the data. Teachable Machine gives us the flexibility of selecting the type of data the model needs to classify, such as images, sounds or poses; here we use the Pose model. The webcam captures the data used to train the model, and we have created our own dataset for this purpose.
A. Pre-processing
The input data is pre-processed and converted into a format that can be used by the machine learning algorithm.
The user should create the classes, with labels, that he or she wants the model to classify. Pre-processing involves techniques such as normalization, scaling, feature extraction and segmentation. After the relevant features have been identified, the model is trained on the data.
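As a small illustration of the normalization step (Teachable Machine performs its own pre-processing internally; this sketch only shows the general idea, assuming PoseNet-style keypoints):

```javascript
// Illustration of keypoint normalization: scale pixel coordinates into [0, 1]
// so that the classifier is less sensitive to frame size and subject position.
// `pose` is assumed to be a PoseNet-style result with keypoints in pixel coordinates.
function normalizeKeypoints(pose, frameWidth, frameHeight) {
  return pose.keypoints.map((kp) => ({
    part: kp.part,
    score: kp.score,
    x: kp.position.x / frameWidth,   // scale x into [0, 1]
    y: kp.position.y / frameHeight,  // scale y into [0, 1]
  }));
}
```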
B. Train Model
This model is trained using our own dataset.
In Teachable Machine, models are trained using classes: we create the required classes and assign labels to them accordingly. The model is trained on labelled data to classify activities into a normal or a suspicious class. If an activity is found to be normal, the model continues working as usual; if it is found to be illegal, it gives out an alert.
When an image is taken as input from the webcam, it is further processed to make predictions. During the training phase, the data goes through a number of feature extraction steps using CNN methods in order to extract suitable features. Predictions are then made with the trained model, and the different classes are returned as results along with the confidence of each class.
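Continuing the earlier sketches, the per-class confidences returned by the model can be read directly from the prediction array (the class names are those defined during training):

```javascript
// Print the confidence of every trained class for the current frame.
// Assumes `model` and `webcam` from the earlier sketches.
async function reportAllClasses() {
  const { posenetOutput } = await model.estimatePose(webcam.canvas);
  const predictions = await model.predict(posenetOutput);
  for (const p of predictions) {
    console.log(`${p.className}: ${(p.probability * 100).toFixed(1)}%`);
  }
}
```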
C. Test Model
This step helps to determine the accuracy and performance of the trained model.
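A simple way to do this, assuming a small held-out set of labelled test images that have already been loaded into the page, is sketched below:

```javascript
// Sketch of a hold-out evaluation: compare the model's top class against the true label.
// `testSamples` is an assumed array of { element, label } pairs, where `element` is a
// loaded HTMLImageElement and `label` is the true class name ("Normal" or "Suspicious").
async function evaluate(testSamples) {
  let correct = 0;
  for (const sample of testSamples) {
    const { posenetOutput } = await model.estimatePose(sample.element);
    const predictions = await model.predict(posenetOutput);
    const top = predictions.reduce((a, b) => (a.probability > b.probability ? a : b));
    if (top.className === sample.label) correct++;
  }
  const accuracy = correct / testSamples.length;
  console.log(`Test accuracy: ${(accuracy * 100).toFixed(1)}%`);
  return accuracy;
}
```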
IV. ALGORITHM
We use the deep learning pipeline described above for detecting humans and classifying their activities. Live video capturing starts whenever the admin starts the video surveillance, and the algorithm proceeds as follows:
1. Input is taken from the live video capture.
2. Humans are detected using Teachable Machine.
3. Data preparation and feature extraction are performed using the CNN.
4. Activities are classified as legal or illegal using the TensorFlow.js model.
5. On detection of a suspicious activity, an alarm is raised and a call is sent to the admin.
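Tying the earlier sketches together, the end-to-end flow of this algorithm can be expressed as:

```javascript
// End-to-end flow, combining the earlier sketches: initialise, then capture,
// classify and alert on every frame until surveillance is stopped.
async function startSurveillance() {
  await init();                                  // load model, start webcam
  async function step() {
    webcam.update();                             // 1. input from live capture
    const { posenetOutput } =
      await model.estimatePose(webcam.canvas);   // 2-3. human detection + CNN features
    const predictions =
      await model.predict(posenetOutput);        // 4. legal vs. illegal classification
    await handlePrediction(predictions);         // 5. alarm + call on suspicious activity
    window.requestAnimationFrame(step);
  }
  window.requestAnimationFrame(step);
}

startSurveillance();
```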
V. CONCLUSION
This paper provides a real-time solution for the surveillance of restricted military areas. We propose to implement this solution with the help of machine learning techniques and a comparative study of different methodologies. Teachable Machine substantially improves the output by automatically learning features from raw data, making motion tracking a promising application. In addition, we demonstrate that our method is accurate and versatile: it can recognize human actions and detect illegal or suspicious activities accordingly, ensuring the security of these areas. Although the proposed method can achieve better results than other methods for activity detection, the accuracy of activity recognition still needs to be improved.